Research Article
Development and Validation of Metacognitive Test in Programming Using Graded Response Model
@INPROCEEDINGS{10.4108/eai.6-10-2022.2327430,
  author={Ni Made Sri Mertasari and I Made Candiasa},
  title={Development and Validation of Metacognitive Test in Programming Using Graded Response Model},
  proceedings={Proceedings of the 5th International Conference on Vocational Education and Technology, IConVET 2022, 6 October 2022, Singaraja, Bali, Indonesia},
  publisher={EAI},
  proceedings_a={ICONVET},
  year={2023},
  month={2},
  keywords={metacognitive; programming; graded response model; level of difficulty; level of discrimination},
  doi={10.4108/eai.6-10-2022.2327430}
}
- Ni Made Sri Mertasari
- I Made Candiasa
Year: 2023
ICONVET
EAI
DOI: 10.4108/eai.6-10-2022.2327430
Abstract
Metacognition includes explaining knowledge that has already been mastered and choosing strategies to master new knowledge. Metacognitive measurement in programming is therefore important because programming problems change dynamically. This study aims to develop and validate a metacognitive test in programming using the Graded Response Model (GRM). The metacognitive test in programming was developed from indicators of cognitive knowledge and cognitive regulation, in the form of descriptive test items. The probability of a participant answering a test item is estimated from the probability of the participant completing each level of the item. Because the completion levels of each item form graded responses, the analysis was carried out with the GRM. The GRM considers two item characteristics: the level of difficulty and the level of discrimination. The results showed that the metacognitive test items in programming were suitable for development with the GRM. The probability of a participant correctly answering a test item is consistent with the participant's ability and the item's level of difficulty. In addition, differences in the probability of answering a test item correspond to the item's discriminating power.
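As a concrete illustration of the model described in the abstract, the sketch below shows how Samejima's Graded Response Model assigns probabilities to the graded completion levels of an item: the probability of reaching a given level depends on the examinee's ability, the item's discrimination, and the difficulty threshold of that level. This is a minimal sketch of the standard GRM formulation, not code from the paper; the function name and the numeric parameter values are hypothetical, chosen only for illustration.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Samejima's Graded Response Model (illustrative sketch).

    theta : examinee ability (scalar)
    a     : item discrimination (scalar)
    b     : ordered difficulty thresholds, length m-1 for an item
            scored in m graded completion levels

    Returns an array of m probabilities, one per response level.
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probability of responding at level k or higher
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Bound with P*_0 = 1 and P*_m = 0, then difference adjacent cumulatives
    p_star = np.concatenate(([1.0], p_star, [0.0]))
    return p_star[:-1] - p_star[1:]

# Hypothetical example: a 4-level item with discrimination 1.2 and
# thresholds -1.0, 0.0, 1.5, evaluated for an examinee with theta = 0.5
probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
print(probs, probs.sum())  # one probability per level; they sum to 1
```

In this formulation, higher discrimination makes the level probabilities change more sharply with ability, and higher thresholds make the upper completion levels harder to reach, which is the sense in which the abstract relates answer probabilities to ability, difficulty, and discriminating power.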